Large Scale Learning to Rank
Author
Abstract
Pairwise learning to rank methods such as RankSVM give good performance, but suffer from the computational burden of optimizing an objective defined over O(n²) possible pairs for data sets with n examples. In this paper, we remove this super-linear dependence on training set size by sampling pairs from an implicit pairwise expansion and applying efficient stochastic gradient descent learners for approximate SVMs. Results show orders-of-magnitude reduction in training time with no observable loss in ranking performance. Source code is freely available at: http://code.google.com/p/sofia-ml
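The core idea above — avoiding the full O(n²) pairwise expansion by sampling one preference pair per SGD step — can be sketched as follows. This is a minimal illustrative sketch, not the sofia-ml implementation; the function name, parameters, and the Pegasos-style learning-rate schedule are assumptions chosen for brevity.

```python
import numpy as np

def train_pairwise_sgd(X, y, n_steps=10_000, lam=0.1, seed=0):
    """Approximate RankSVM via SGD over sampled pairs (illustrative sketch).

    Instead of materializing all O(n^2) preference pairs, each step samples
    one (positive, negative) pair from the implicit pairwise expansion and
    applies a hinge-loss gradient step on the difference vector.
    """
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)   # indices of relevant examples
    neg = np.flatnonzero(y == 0)   # indices of non-relevant examples
    w = np.zeros(X.shape[1])
    for t in range(1, n_steps + 1):
        i, j = rng.choice(pos), rng.choice(neg)
        d = X[i] - X[j]            # difference vector for the sampled pair
        eta = 1.0 / (lam * t)      # Pegasos-style decaying learning rate
        if w @ d < 1.0:            # pairwise hinge loss is active
            w = (1 - eta * lam) * w + eta * d
        else:                      # only the regularizer contributes
            w = (1 - eta * lam) * w
    return w
```

Each step costs O(1) pair lookups plus one gradient update, so total training cost depends on the number of SGD steps rather than on the size of the pairwise expansion.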
Similar Resources
Online Learning to Rank for Content-Based Image Retrieval
A major challenge in Content-Based Image Retrieval (CBIR) is to bridge the semantic gap between low-level image contents and high-level semantic concepts. Although researchers have investigated a variety of retrieval techniques using different types of features and distance functions, no single best retrieval solution can fully tackle this challenge. In a real-world CBIR task, it is often highl...
A Software Library for Conducting Large Scale Experiments on Learning to Rank Algorithms
This paper presents an efficient application for driving large scale experiments on Learning to Rank (LtR) algorithms. We designed a software library that exploits caching mechanisms and efficient data structures to make the execution of massive experiments on LtR algorithms as fast as possible, in order to try as many combinations of components as possible. The presented software has been tested on di...
A Saddle Point Approach to Structured Low-rank Matrix Learning in Large-scale Applications
We propose a novel optimization approach for learning a low-rank matrix which is also constrained to lie in a linear subspace. Exploiting a particular variational characterization of the squared trace norm regularizer, we formulate the structured low-rank matrix learning problem as a rank-constrained saddle point minimax problem. The proposed modeling decouples the low-rank and structural constr...
BILGO: Bilateral greedy optimization for large scale semidefinite programming
Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks at a large scale, scalability and computational efficiency are considered desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyse a new bilateral greedy optimization (denoted B...
Web-Scale Responsive Visual Search at Bing
In this paper, we introduce a web-scale general visual search system deployed in Microsoft Bing. The system accommodates tens of billions of images in the index, with thousands of features for each image, and can respond in less than 200 ms. In order to overcome the challenges in relevance, latency, and scalability in such large scale of data, we employ a cascaded learning-to-rank framework bas...
Active Subspace: Toward Scalable Low-Rank Learning
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic ...
Journal:
Volume / Issue
Pages: -
Publication date: 2009